Chapter 2. New Features
This chapter describes the new features of the MPI, PVM, and SHMEM products
contained in the Cray Message Passing Toolkit and Message Passing Toolkit
for IRIX (MPT), releases 1.2, 1.2.0.2, and 1.2.1. The newest features, those
of release 1.2.1, are described first.
2.1 MPT Release 1.2.1 Features
The main features of the MPT 1.2.1 release are as follows:
* Support for checkpoint and restart of a limited class of MPI
applications on IRIX systems
* Support for Miser submission of MPI jobs on a single IRIX system
* Addition of IGET and IPUT functions in the libsma library for Cray PVP
systems
* Addition of MPI_Type_get_envelope and MPI_Type_get_contents functions
on UNICOS/mk systems
* New functions available on IRIX systems
The following sections describe these features.
2.1.1 Checkpoint and Restart of MPI Jobs
MPT release 1.2.1 supports checkpoint and restart of MPI jobs that consist
of a single executable file on IRIX systems. Jobs such as the one in the
following example can be checkpointed, provided all of the objects can be
checkpointed:
mpirun -np n ./a.out
In this release, jobs that consist of more than one executable file cannot
be checkpointed (for example, mpirun 2 ./a.out : 2 ./b.out). For more
information, see the -cpr option on the mpirun(1) command.
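For illustration only, a checkpointable job might be launched as
follows; this sketch assumes that -cpr is given as a global option, so
consult the mpirun(1) man page for the authoritative syntax:
mpirun -cpr -np 4 ./a.out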
2.1.2 MPI and Miser
Miser is a job scheduling feature in the IRIX 6.5 release. MPT release 1.2.1
supports Miser submission of MPI jobs that run on a single IRIX system. For
more information, see the -miser option on the mpirun(1) command.
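As a sketch only (the exact argument form for -miser is documented on
the mpirun(1) man page), a Miser submission might look like the
following:
mpirun -miser -np 8 ./a.out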
2.1.3 IGET and IPUT Functions
The SHMEM IGET and IPUT strided copy functions have been added to the libsma
library for Cray PVP systems.
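The following C fragment is a minimal sketch of the strided-copy
pattern these functions provide; the shmem_iget64 entry point and the
<mpp/shmem.h> header are assumptions based on the SHMEM man pages, and
the Fortran IGET calls added here take the same target, source,
target-stride, source-stride, length, and PE arguments:

#include <mpp/shmem.h>   /* assumed header name */

long src[10];   /* symmetric source array */
long dst[5];

void gather_even_words(void)
{
    /* Copy src[0], src[2], ..., src[8] from PE 0 into dst[0..4]:
     * target stride 1, source stride 2, 5 elements, source PE 0. */
    shmem_iget64(dst, src, 1, 2, 5, 0);
}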
2.1.4 MPI_Type_get_envelope and MPI_Type_get_contents Functions
The MPI_Type_get_envelope and MPI_Type_get_contents functions, defined in
the MPI-2 standard, have been added for UNICOS/mk systems. These functions
query for information about the internal structure of an MPI derived data
type.
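As a brief C sketch (not taken from the release notes),
MPI_Type_get_envelope reports the constructor (combiner) and the
parameter-array sizes for a datatype, which the caller then uses to
size the arrays passed to MPI_Type_get_contents:

#include <mpi.h>

/* Sketch: decode a derived datatype with small parameter arrays. */
void inspect_type(MPI_Datatype dtype)
{
    int ni, na, nd, combiner;
    int ints[8];
    MPI_Aint addrs[8];
    MPI_Datatype types[8];

    MPI_Type_get_envelope(dtype, &ni, &na, &nd, &combiner);
    if (combiner != MPI_COMBINER_NAMED && ni <= 8 && na <= 8 && nd <= 8) {
        /* For MPI_COMBINER_CONTIGUOUS, ints[0] is the count and
         * types[0] is the underlying datatype. */
        MPI_Type_get_contents(dtype, ni, na, nd, ints, addrs, types);
    }
}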
2.1.5 New Functions for IRIX Systems
The shmem_set_lock(3), shmem_clear_lock(3), and shmem_test_lock(3) functions
are now available on IRIX systems.
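The following C sketch (header name assumed) shows the intended usage:
the lock must be a symmetric long initialized to 0, shmem_set_lock
blocks until the lock is acquired, and shmem_test_lock returns 0 when
it acquires the lock without blocking:

#include <mpp/shmem.h>   /* assumed header name */

static long lock = 0;    /* symmetric lock variable, initialized to 0 */

void locked_update(void)
{
    shmem_set_lock(&lock);      /* blocks until this PE holds the lock */
    /* ... critical section: update symmetric data ... */
    shmem_clear_lock(&lock);    /* release the lock */
}

void opportunistic_update(void)
{
    if (shmem_test_lock(&lock) == 0) {   /* 0 means the lock was taken */
        /* ... critical section ... */
        shmem_clear_lock(&lock);
    }
}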
2.2 MPT Release 1.2.0.2 Features
The MPT 1.2.0.2 update release provides compile-time interface checking
for Fortran SHMEM subroutine calls. A SHMEM interface definition module
called shmem_interface has been added to permit Fortran 90 programmers
to verify the correctness of SHMEM subroutine calls at compile time.
This feature is available on IRIX systems only.
To activate interface checking, specify the -auto_use shmem_interface option
on the f90 command line, as in the following example:
f90 -64 -auto_use shmem_interface -LANG:recursive=on program.f -lsma
This feature requires Fortran 90 compiler release 7.2.1 or later;
earlier compiler releases can process neither the shmem_interface
definition module nor the -auto_use command-line option. For
installations in which MPT 1.2.0.2 is installed in an alternate
location, this feature also depends on the mpt and MIPSpro module files
from the Modules Software version 2.2.1.1 package. To determine whether
your Modules Software is at the proper level, enter the following
command:
versions modules
2.3 MPT Release 1.2 Features
The main features of the MPT 1.2 release are as follows:
* MPI for UNICOS systems is now based on the Silicon Graphics proprietary
implementation rather than on the MPICH implementation. The new
implementation uses Array Services software, which is included in the
required UNICOS versions for this release. For software requirements,
see Section 6.1.
* Additional SHMEM functionality for IRIX systems.
* Common documentation set for MPT on all platforms.
* Bug fixes for all products.
2.3.1 MPI Mixed Communication Modes for UNICOS Systems
New functionality allows an MPI application to span two machines, using
TCP/IP for communication between the machines, while the parts of the
MPI application that run within each machine use the Cray Research
optimized shared memory mode for communication.
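For example (hostnames hypothetical, and the entry syntax follows the
multihost form described in Section 2.3.8), a job spanning two machines
might be launched as follows; the four processes within each machine
communicate through shared memory, while traffic between the machines
uses TCP/IP:
mpirun hosta 4 ./a.out : hostb 4 ./a.out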
2.3.2 New MPI Environment Variable for IRIX and UNICOS .login Files
The MPI_ENVIRONMENT environment variable has been added to allow users to
create the login session as part of an MPI job.
2.3.3 MPI Support for HIPPI Bypass on IRIX Systems
MPI now uses the HIPPI bypass protocol for jobs that use up to 64 processors
on each system in a cluster. In previous releases, HIPPI bypass was
restricted to jobs that used 32 or fewer processors.
2.3.4 MPI Buffering Restrictions Eliminated on IRIX and UNICOS Systems
MPI now buffers messages of all sizes. In previous IRIX releases, buffering
was restricted to messages of 16 kilobytes or less. As a result of lifting
this restriction, HIPPI bypass is now used for messages of all sizes on IRIX
systems. In previous UNICOS releases, no buffering was done.
2.3.5 MPI Handling of stdout/stderr Messages on IRIX and UNICOS Systems
All stdout and stderr messages are now displayed by MPI process 0,
regardless of which MPI process originated the message. A new
environment variable, MPI_PREFIX, allows users to display a prefix for
each message. For example, the user can specify that each message be
prefixed with the number of the MPI process from which it originated.
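For example, a fixed prefix can be set as follows (csh syntax; the
token that expands to the originating process number is described on
the mpirun(1) man page and is not reproduced here):
setenv MPI_PREFIX "mpi: "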
2.3.6 MPI Support for Third Party Products on IRIX Systems
MPI has been modified to allow third-party products to be developed for
use with MPI on IRIX systems. The following products are available from
the respective organizations (URLs are included for more information):
* LSF from Platform Computing (URL http://www.platform.com)
* Dolphin TotalView from Dolphin Interconnect Solutions, Inc. (URL
http://www.dolphinics.com/tw/tvover.htm)
* ROMIO from Argonne National Laboratory (URL
http://www.mcs.anl.gov/home/thakur/romio)
2.3.7 MPI Support for CRAY T3E-900 System Optimization
MPI has been modified to automatically disable the streams coherency
workaround if MPI is running on a CRAY T3E-900 system. This feature allows
access to existing functionality that can help improve communication
bandwidth. For example, with streams enabled, the MPI_BUFFER_MAX environment
variable can now be set to 0 on CRAY T3E-900 systems to tell MPI to disable
internal buffering. Previously, this environment variable was ignored unless
streams were disabled. This support was also included in the MPT 1.1.0.4
release.
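For example, on a CRAY T3E-900 system the following csh command
disables internal MPI buffering while streams remain enabled:
setenv MPI_BUFFER_MAX 0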
2.3.8 New mpirun(1) Command Options
New mpirun(1) command options for IRIX systems are as follows:
* -array
* -help
* -prefix
* -verbose
* -nt
New mpirun(1) command options for UNICOS and UNICOS/mk systems include
multihost entries, with the following options available for each host:
* -file
* -np
* -nt
For descriptions of these options, see the mpirun(1) man page.
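For illustration only (the prefix string and process count are
hypothetical), an IRIX invocation combining several of the new options
might look like the following:
mpirun -verbose -prefix "out: " -np 4 ./a.out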
2.3.9 PVM Use of POSIX Shared Memory on IRIX Systems
PVM now uses POSIX shared memory rather than IRIX shared arenas for
communication within a system. This results in improved robustness and
performance.
2.3.10 Alternate Remote Shell Command for PVM
Users can now override the default remote shell command used when launching
slave PVM daemons. At startup, the master PVM daemon checks for the
PVM_RSH environment variable; if it is set, its value is used as the
remote shell command. This feature is supported on all platforms.
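For example, the following csh command (path hypothetical) directs the
master daemon to start slave daemons with an alternate remote shell:
setenv PVM_RSH /usr/local/bin/myrsh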
2.3.11 New SHMEM Space Allocation Functions for IRIX Platforms
The following functions have been added to SHMEM for IRIX platforms:
shmalloc(3C)
Allocates a block of memory
shfree(3C)
Deallocates a block of memory
shrealloc(3C)
Changes the size of a block of memory
SHPALLOC(3F)
Allocates a block of memory
SHPCLMOVE(3F)
Extends a block or copies the contents of a block into a larger block
SHPDEALLC(3F)
Returns a block of memory to the symmetric heap
For syntax and details of these functions, see the man pages.
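The following C sketch (header name assumed) allocates a symmetric
block, grows it, and frees it; each of these calls must be made by all
PEs with the same arguments so that the returned addresses remain
symmetric:

#include <mpp/shmem.h>   /* assumed header name */

void symmetric_heap_demo(void)
{
    /* Allocate 1024 longs from the symmetric heap on every PE. */
    long *buf = (long *) shmalloc(1024 * sizeof(long));

    /* Grow the block, preserving its contents. */
    buf = (long *) shrealloc(buf, 2048 * sizeof(long));

    /* Return the block to the symmetric heap. */
    shfree(buf);
}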
2.3.12 SHMEM Job Placement Environment Variables for IRIX Platforms
The following environment variables have been added to control SHMEM job
placement on IRIX systems:
SMA_DSM_OFF
If set to any value, this variable deactivates processor-memory
affinity control. SHMEM processes will then run on any available
processor, regardless of whether it is near the memory associated with
that process.
SMA_DSM_VERBOSE
When set to any value, this variable causes information about process
and memory placement to be printed to stderr.
SMA_DSM_PPM
When set to an integer value, this variable specifies the number of
processors to be mapped to each memory. The default is 2.
SMA_DSM_TOPOLOGY
This variable specifies the shape of the set of hardware nodes on which
the PE memories are allocated. This variable can be set to any of the
following values:
free
cube
cube_fixed
The default is free.
PAGESIZE_DATA
This variable can be set to an integer value that specifies the desired
page size in kilobytes for program data areas. Supported values include
16, 64, 256, and 1024.
SMA_SYMMETRIC_SIZE
This variable specifies the size, in bytes, of symmetric memory; that
is, the size of the static space plus the per-PE symmetric heap.
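For example, the following csh commands (values illustrative) map two
processors to each memory, request a cube topology, and ask for
placement information to be printed to stderr:
setenv SMA_DSM_PPM 2
setenv SMA_DSM_TOPOLOGY cube
setenv SMA_DSM_VERBOSE 1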
2.3.13 New SHMEM Supercomputing API Routines for IRIX Systems
The following supercomputing application programming interface (API)
routines have been added to SHMEM:
* shmem_broadcast32(3)
* shmem_broadcast64(3)
* shmem_fcollect32(3)
* shmem_fcollect64(3)
* shmem_collect32(3)
* shmem_collect64(3)
* shmem_barrier(3)
For more information about these routines, see the man pages.
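The following C sketch broadcasts ten 32-bit words from PE 0 to all
PEs; the header and the pSync constant names are assumptions based on
SHMEM header conventions, and the symmetric pSync work array must be
initialized to the sync value on every PE before use:

#include <mpp/shmem.h>   /* header and constant names assumed */

static long pSync[_SHMEM_BCAST_SYNC_SIZE];
static int src[10], dst[10];   /* symmetric arrays */

void broadcast_demo(void)
{
    int i;
    int npes = _num_pes();   /* PE-count entry point per Cray SHMEM */

    for (i = 0; i < _SHMEM_BCAST_SYNC_SIZE; i++)
        pSync[i] = _SHMEM_SYNC_VALUE;
    shmem_barrier_all();     /* ensure pSync is initialized everywhere */

    /* Root PE 0; active set starts at PE 0, stride 2**0 = 1, npes PEs. */
    shmem_broadcast32(dst, src, 10, 0, 0, 0, npes, pSync);
}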
2.3.14 Fortran Character Support in MPI for UNICOS/mk Systems
Fortran character support has been added so that messages that involve
character data can be sent. This support was also included in the MPT
1.1.0.4 release.
Copyright (c) 1998 Cray Research, Inc.